# Code Repair

Qwen2.5-Coder-0.5B-Instruct-AWQ
Qwen2.5 Coder 0.5B Instruct AWQ
Qwen2.5-Coder is the latest series of the Qwen large language models, focusing on code generation, reasoning, and repair. Built on the robust foundations of Qwen2.5, with a training corpus expanded to 5.5 trillion tokens that includes source code, text-code grounding data, and synthetic data, Qwen2.5-Coder-32B has emerged as the leading open-source code LLM, matching the coding capabilities of GPT-4o. This checkpoint is an AWQ 4-bit quantized version of the 0.5B instruction-tuned model, built on a causal-language-modeling transformer architecture with pre-training and fine-tuning stages.
Code Reasoning
45.3K
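As a sketch of how an instruct-tuned repair checkpoint like the one above might be prompted, the snippet below builds a chat-style message list; the model id and the commented-out loading code are assumptions based on the standard Hugging Face `transformers` workflow, not details taken from this listing.

```python
# Hypothetical code-repair prompt builder for an instruct-tuned coder model.
def build_repair_messages(code: str, error: str) -> list[dict]:
    """Package broken code and its error into chat-template messages."""
    return [
        {"role": "system", "content": "You are a code-repair assistant."},
        {"role": "user",
         "content": f"Fix this code.\n\nError: {error}\n\n```python\n{code}\n```"},
    ]

messages = build_repair_messages("print(x", "SyntaxError: unexpected EOF")

# With transformers installed, generation would look roughly like
# (assumed model id, standard chat-template API):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-0.5B-Instruct-AWQ")
# model = AutoModelForCausalLM.from_pretrained(
#     "Qwen/Qwen2.5-Coder-0.5B-Instruct-AWQ", device_map="auto")
# prompt = tok.apply_chat_template(messages, add_generation_prompt=True,
#                                  tokenize=False)
print(messages[1]["content"])
```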
Qwen2.5-Coder-3B-Instruct-GGUF
Qwen2.5 Coder 3B Instruct GGUF
Qwen2.5-Coder is the latest series of the Qwen large language models, focusing on code generation, reasoning, and repair. Built on the powerful Qwen2.5, it has been trained on a dataset of 5.5 trillion tokens including source code, text-code grounding data, and synthetic data. Qwen2.5-Coder-32B has emerged as the most advanced open-source code large language model, matching the coding capabilities of GPT-4o. In practical applications, it provides a more comprehensive foundation for tasks such as code agents, enhancing coding prowess while retaining advantages in math and general abilities.
Code Reasoning
46.4K
Qwen2.5-Coder-14B-Instruct-AWQ
Qwen2.5 Coder 14B Instruct AWQ
Qwen2.5-Coder is a series of large language models specifically designed for coding, ranging from 0.5 to 32 billion parameters to meet various developer needs. The model shows significant improvements in code generation, reasoning, and repair, leveraging the powerful Qwen2.5 framework trained on 5.5 trillion tokens, including source code, text-code grounding data, and synthetic data. Qwen2.5-Coder-32B is currently the most advanced open-source large language model for code generation, matching the coding capabilities of GPT-4o. Additionally, this model supports long contexts of up to 128K tokens and employs AWQ 4-bit quantization to improve efficiency.
Code Reasoning
51.9K
Qwen2.5-Coder-32B-Instruct-GPTQ-Int4
Qwen2.5 Coder 32B Instruct GPTQ Int4
The Qwen2.5-Coder-32B-Instruct-GPTQ-Int4 is a large language model based on Qwen2.5, featuring 32.5 billion parameters and supporting long-context processing of up to 128K tokens. This model shows significant improvements in code generation, code inference, and code repair, making it a leader among current open-source code language models. It not only enhances coding capabilities but also maintains strengths in mathematics and general reasoning.
Code Inference
48.3K
Qwen2.5-Coder-32B-Instruct-GGUF
Qwen2.5 Coder 32B Instruct GGUF
Qwen2.5-Coder is a model specifically designed for code generation, significantly improving capabilities in this area, with a variety of parameter sizes and support for quantization. It is free and enhances efficiency and quality for developers.
Code Reasoning
50.0K
Qwen2.5-Coder-0.5B
Qwen2.5 Coder 0.5B
Qwen2.5-Coder is the latest series of the Qwen large language models, focusing on code generation, code reasoning, and code repair. Built upon the powerful Qwen2.5 model, this series significantly enhances coding capabilities by scaling the training corpus to 5.5 trillion tokens, including source code, text-code grounding data, and synthetic data. Qwen2.5-Coder-32B has become the current state-of-the-art open-source large language model for code, comparable in coding ability to GPT-4o. Moreover, Qwen2.5-Coder provides a more comprehensive foundation for practical applications like code agents, enhancing coding abilities while maintaining strengths in mathematics and general capabilities.
Coding Assistant
47.5K
Qwen2.5-Coder-3B-Instruct
Qwen2.5 Coder 3B Instruct
Qwen2.5-Coder is the latest series of the Qwen large language models, focused on code generation, reasoning, and repair. Based on the powerful Qwen2.5, this model series significantly enhances those capabilities by increasing training tokens to 5.5 trillion, including source code, text-code grounding data, synthetic data, and more. The Qwen2.5-Coder-3B model contains 3.09B parameters, 36 layers, 16 attention heads for queries (Q), and 2 attention heads for keys/values (KV), with a full context length of 32,768 tokens. It stands out among open-source code LLMs, matching the coding capabilities of GPT-4o, and provides developers with a powerful code assistance tool.
Coding Assistant
46.1K
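The architecture figures quoted for Qwen2.5-Coder-3B (36 layers, 16 query heads, 2 KV heads, 32,768-token context) imply grouped-query attention, which shrinks the KV cache. A rough sizing sketch, where head_dim=128 and fp16 storage are illustrative assumptions not stated in the listing:

```python
# KV-cache sizing sketch for a grouped-query-attention (GQA) model,
# using the Qwen2.5-Coder-3B figures quoted above.
layers, q_heads, kv_heads = 36, 16, 2
head_dim, bytes_per_val = 128, 2   # assumptions: head_dim=128, fp16
ctx = 32_768

# Each layer caches one K and one V vector per KV head per token.
kv_bytes_per_token = layers * kv_heads * head_dim * bytes_per_val * 2

# Full cache at maximum context, in GiB.
full_cache_gib = kv_bytes_per_token * ctx / 2**30

# Standard multi-head attention would cache q_heads instead of kv_heads.
savings = q_heads / kv_heads

print(kv_bytes_per_token)  # 36864 bytes per token
print(full_cache_gib)      # 1.125 GiB at full 32K context
print(savings)             # 8.0x smaller than full MHA caching
```

Under these assumptions, GQA cuts the cache to an eighth of what full multi-head attention would need, which is one reason small coder models remain usable at long context.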
Qwen2.5-Coder Technical Report
Qwen2.5 Coder Technical Report
The Qwen2.5-Coder series consists of code-specific models based on the Qwen2.5 architecture, including Qwen2.5-Coder-1.5B and Qwen2.5-Coder-7B. These models are continually pre-trained on a massive corpus of over 5.5 trillion tokens, showcasing impressive code generation capabilities while maintaining generality through meticulous data cleaning, scalable synthetic data generation, and balanced data mixing. Qwen2.5-Coder achieves state-of-the-art performance on more than ten benchmarks across code-related tasks, including code generation, completion, reasoning, and repair, consistently outperforming even larger models. The release of this series not only pushes the boundaries of intelligent coding research but also, through its permissive licensing, encourages developers to adopt it in real-world applications.
Coding Assistant
61.8K
Qwen2.5-Coder-14B
Qwen2.5 Coder 14B
Qwen2.5-Coder-14B belongs to the code-focused Qwen model series, which spans sizes from 0.5 to 32 billion parameters to meet diverse developer needs. The model shows significant improvements in code generation, reasoning, and repair, built upon the powerful Qwen2.5 with the training corpus expanded to 5.5 trillion tokens, including source code, text-code grounding data, and synthetic data. Qwen2.5-Coder-32B has become the leading open-source code LLM, matching the coding capacity of GPT-4o. Additionally, the series provides a comprehensive foundation for real-world applications such as code agents, enhancing coding abilities while maintaining advantages in mathematics and general tasks. It supports long contexts of up to 128K tokens.
Coding Assistant
48.3K
Qwen2.5-Coder-14B-Instruct
Qwen2.5 Coder 14B Instruct
Qwen2.5-Coder-14B-Instruct is a large language model in the Qwen2.5-Coder series, focusing on code generation, reasoning, and repair. Built upon the powerful Qwen2.5, this model is trained on 5.5 trillion tokens, including source code and synthetic data, making it a leading open-source code LLM. It not only enhances coding capabilities but also maintains strengths in mathematics and general abilities while supporting long contexts of up to 128K tokens.
Coding Assistant
46.1K
Qwen2.5-Coder-32B-Instruct
Qwen2.5 Coder 32B Instruct
Qwen2.5-Coder represents a series of large language models designed specifically for code generation, available in six mainstream sizes of 0.5, 1.5, 3, 7, 14, and 32 billion parameters to meet diverse developers' needs. The series has made significant improvements in code generation, reasoning, and repair, built upon the robust Qwen2.5 with the training corpus expanded to 5.5 trillion tokens, including source code, text-code grounding data, synthetic data, and more. Qwen2.5-Coder-32B is currently the most advanced open-source code generation large language model, rivaling the coding capabilities of GPT-4o. It not only enhances coding abilities but also retains advantages in mathematics and general understanding, supporting long contexts of up to 128K tokens.
Coding Assistant
46.1K
Aide.dev
Aide.dev
Aide is an open-source AI-native integrated development environment (IDE) built around an agentic framework. It can propose code repairs or ask about potentially missing files, iterating through linter errors and leveraging LSP tools (such as 'Go to references') to pull in relevant context. Key advantages include developer control, an experience akin to pairing with a real engineer, quick invocation, and local-first intelligent processing. Aide aims to address the maintainability and accuracy problems of AI editing in large codebases; it resolves 43% of issues on SWE-bench Lite, making it the best-performing solution on that benchmark at the time.
Development & Tools
115.4K
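The loop Aide's description outlines — iterate over linter errors, then use an LSP-style references query to pull extra context — can be sketched roughly as below. This is a hypothetical illustration: `run_linter` and `find_references` are stand-in stubs, not Aide's real API.

```python
# Minimal sketch of a lint-driven context-gathering loop.
from dataclasses import dataclass

@dataclass
class LintError:
    file: str
    line: int
    symbol: str
    message: str

def run_linter(files: list[str]) -> list[LintError]:
    # Stub: a real implementation would shell out to a linter.
    return [LintError("app.py", 3, "fetch_user",
                      "undefined name 'fetch_user'")]

def find_references(symbol: str) -> list[str]:
    # Stub for an LSP "Go to references" request.
    return {"fetch_user": ["db/users.py:12"]}.get(symbol, [])

def gather_repair_context(files: list[str]) -> dict:
    """For each lint error, attach the locations that reference the symbol,
    giving a repair agent the surrounding context it needs."""
    context = {}
    for err in run_linter(files):
        context[err.symbol] = {
            "error": err.message,
            "references": find_references(err.symbol),
        }
    return context

print(gather_repair_context(["app.py"]))
```

A real agent would feed this context dictionary into its edit-proposal step and re-run the linter after each patch until the error list is empty.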
WaveCoder
Wavecoder
WaveCoder is a large language model for code developed by Microsoft Research Asia. It enhances the versatility and functionality of code language models through instruction fine-tuning. The model demonstrates exceptional performance across various programming tasks, including code summarization, generation, translation, and repair. Its innovation lies in the data synthesis framework and two-stage instruction data generation strategy, ensuring high quality and diversity of data. The model's open-source nature provides developers with a powerful coding assistance tool, contributing to increased development efficiency and code quality.
AI Code Generation
48.3K
Fresh Picks
Qwen2.5-Coder
Qwen2.5 Coder
Qwen2.5-Coder is part of the Qwen2.5 open-source family, focusing on tasks like code generation, inference, and repair. By leveraging large-scale code training data, it improves code capability while maintaining mathematical and general skills. The model supports 92 programming languages and has shown significant advancements in code-related tasks. Qwen2.5-Coder is licensed under Apache 2.0 to accelerate the application of coding intelligence.
AI Code Assistant
60.7K
Promind
Promind
ProMind.AI is a content generation tool based on the OpenAI GPT-3 and GPT-4 models. It can generate tweets, blog posts, LinkedIn posts, YouTube scripts, and more. This tool can help you fix code errors, generate code, and save time. ProMind.AI has powerful features that can help you improve your writing efficiency.
Writing Assistant
51.6K
@SummariseThis
@summarisethis
ProMindGPT is a content generation tool based on the OpenAI GPT-3 and GPT-4 models. It can help users generate a variety of text content, including tweets, blog posts, LinkedIn posts, and YouTube scripts. ProMindGPT also assists in fixing code errors and generating code snippets, saving users time and effort.
Writing Assistant
42.0K
MagickPen
Magickpen
MagickPen is a ChatGPT-powered AI writing assistant that can quickly generate articles, essays, reports, stories, advertisements, and even jokes. It also includes translation, grammar checking, and code repair features to enhance your writing abilities. MagickPen offers a free trial and paid plans, including pay-as-you-go and subscription options.
Writing Assistant
49.1K
ProMind.ai
Promind.ai
ProMind.AI is a content generation tool powered by OpenAI's GPT-3 and GPT-4 models. It can help you generate tweets, blog posts, LinkedIn posts, YouTube scripts, and more. It can also assist in fixing code errors and generating code, saving you valuable time. ProMind.AI offers powerful features and a user-friendly interface to cater to a range of writing needs. Pricing varies based on usage; see the official website for details.
AI content generation
43.9K
Featured AI Tools
Flow AI
Flow AI
Flow is an AI-driven movie-making tool designed for creators, utilizing Google DeepMind's advanced models to allow users to easily create excellent movie clips, scenes, and stories. The tool provides a seamless creative experience, supporting user-defined assets or generating content within Flow. In terms of pricing, the Google AI Pro and Google AI Ultra plans offer different functionalities suitable for various user needs.
Video Production
42.8K
NoCode
Nocode
NoCode is a platform that requires no programming experience, allowing users to quickly generate applications by describing their ideas in natural language, aiming to lower development barriers so more people can realize their ideas. The platform provides real-time previews and one-click deployment features, making it very suitable for non-technical users to turn their ideas into reality.
Development Platform
44.7K
ListenHub
Listenhub
ListenHub is a lightweight AI podcast generation tool that supports both Chinese and English. Based on cutting-edge AI technology, it can quickly generate podcast content of interest to users. Its main advantages include natural dialogue and ultra-realistic voice effects, allowing users to enjoy high-quality auditory experiences anytime and anywhere. ListenHub not only improves the speed of content generation but also offers compatibility with mobile devices, making it convenient for users to use in different settings. The product is positioned as an efficient information acquisition tool, suitable for the needs of a wide range of listeners.
AI
42.2K
MiniMax Agent
Minimax Agent
MiniMax Agent is an intelligent AI companion built on the latest multimodal technology. Its MCP-based multi-agent collaboration enables AI teams to efficiently solve complex problems. It provides features such as instant answers, visual analysis, and voice interaction, which can increase productivity tenfold.
Multimodal technology
43.1K
Chinese Picks
Tencent Hunyuan Image 2.0
Tencent Hunyuan Image 2.0
Tencent Hunyuan Image 2.0 is Tencent's latest released AI image generation model, significantly improving generation speed and image quality. With a super-high compression ratio codec and new diffusion architecture, image generation speed can reach milliseconds, avoiding the waiting time of traditional generation. At the same time, the model improves the realism and detail representation of images through the combination of reinforcement learning algorithms and human aesthetic knowledge, suitable for professional users such as designers and creators.
Image Generation
42.2K
OpenMemory MCP
Openmemory MCP
OpenMemory is an open-source personal memory layer that provides private, portable memory management for large language models (LLMs). It ensures users have full control over their data, maintaining its security when building AI applications. This project supports Docker, Python, and Node.js, making it suitable for developers seeking personalized AI experiences. OpenMemory is particularly suited for users who wish to use AI without revealing personal information.
open source
42.8K
FastVLM
Fastvlm
FastVLM is an efficient visual encoding model designed specifically for visual language models. It uses the innovative FastViTHD hybrid visual encoder to reduce both the time required to encode high-resolution images and the number of output tokens, delivering strong speed and accuracy. FastVLM is positioned to give developers powerful visual language processing capabilities across a variety of scenarios, performing especially well on mobile devices that require rapid responses.
Image Processing
41.4K
Chinese Picks
LiblibAI
Liblibai
LiblibAI is a leading Chinese AI creative platform offering powerful AI creative tools to help creators bring their imagination to life. The platform provides a vast library of free AI creative models, which users can search and apply to image, text, and audio creations, and users can also train their own AI models on the platform. Focused on the diverse needs of creators, LiblibAI is committed to inclusive access and to serving the creative industry, so that everyone can enjoy creating.
AI Model
6.9M
AIbase
Empowering the Future, Your AI Solution Knowledge Base
© 2025 AIbase